
[ONNX] Support Bernoulli op on ONNX front-end #13802

Merged: 19 commits into apache:main on Jan 27, 2023

Conversation

vvchernov (Contributor)

Support Bernoulli op on ONNX front-end

@tvm-bot (Collaborator) commented Jan 18, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@vvchernov force-pushed the vc/bernoulli branch 2 times, most recently from 441626b to 02c0960 on January 18, 2023 12:47
@vvchernov changed the title from "WIP: [ONNX] Support Bernoulli op on ONNX front-end" to "[ONNX] Support Bernoulli op on ONNX front-end" on Jan 18, 2023
@KJlaccHoeUM9l (Contributor) left a comment


LGTM

assert in_dtype in [
    "float32",
    "float64",
], "Only float input tensor is currently supported."
Contributor:

ONNX has support for float16. However, this type is not supported here. Maybe it's worth pointing out the reason (TODO) why this data type is not currently supported at the ONNX front-end level?
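For illustration, a minimal sketch of what such a TODO could look like around the dtype check (hypothetical wording and placement, not the merged converter code):

```python
# Illustrative only; not the merged code.
assert in_dtype in [
    "float32",
    "float64",
], "Only float input tensor is currently supported."
# TODO: ONNX Bernoulli also accepts float16 inputs; they are not handled by
# this converter yet -- document the reason (or add support) here.
```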

@vvchernov (Contributor, Author) commented Jan 23, 2023

Hello @AndrewZhaoLuo! Could you review this PR? One more question: do you know what the "expanded" part in the test names means ("test_bernoulli_expanded", "test_bernoulli_double_expanded", "test_bernoulli_seed_expanded")? I'm ready to support these tests, but I don't know what they should do.

@echuraev (Contributor) left a comment


LGTM. Thanks

@AndrewZhaoLuo (Contributor) commented Jan 24, 2023

IIRC the expanded tests simply expand the node into things that can be written in terms of existing ops.

E.g. if we had an "FMA" node in ONNX which does output = in1 * in2 + in3, FMA_test would contain just an FMA node, while FMA_expanded_test would contain a multiplication node and an addition node.

So in FMA_test we would hit the FMA node converter, while for FMA_expanded_test we would hit the multiply and addition node converters.
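To make the "expanded" idea concrete, here is a minimal sketch of such an expanded graph built with onnx.helper (FMA is purely hypothetical here, and this is not code from the PR):

```python
import onnx
from onnx import TensorProto, helper

# Expanded form of the hypothetical FMA example: output = in1 * in2 + in3
# is built from existing Mul/Add ops, so an importer exercises the Mul and
# Add converters rather than a single (non-existent) FMA converter.
mul = helper.make_node("Mul", ["in1", "in2"], ["prod"])
add = helper.make_node("Add", ["prod", "in3"], ["output"])

inputs = [
    helper.make_tensor_value_info(name, TensorProto.FLOAT, [4])
    for name in ("in1", "in2", "in3")
]
outputs = [helper.make_tensor_value_info("output", TensorProto.FLOAT, [4])]
model = helper.make_model(helper.make_graph([mul, add], "fma_expanded", inputs, outputs))
onnx.checker.check_model(model)  # structurally valid ONNX graph
```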

https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/test/onnx/main.cc#L699 If we look here, the bernoulli tests are listed as broken, so 🤷

@AndrewZhaoLuo (Contributor) left a comment


LGTM for the most part, just some comments on testing.

Two resolved review threads on tests/python/frontend/onnx/test_forward.py (outdated)
@AndrewZhaoLuo (Contributor) left a comment


Actually going to block for now, due to concern about flakes, at least until @octoJon chimes in.

@vvchernov (Contributor, Author)
Hello @AndrewZhaoLuo and @octoJon! I think CI will be green soon. Could you recheck the updated test part?

Three resolved review threads on tests/python/frontend/onnx/test_forward.py (outdated)
@AndrewZhaoLuo (Contributor)
LGTM, just a few final nits.

    tvm.testing.assert_allclose(inputs, tvm_out)
else:
    # check that mean value is close to the theoretical one by binomial test
    bnm_test_res = scipy.stats.binomtest(
Contributor:

Nit: Strictly speaking, this test is only appropriate when the input probabilities are all identical. I think it's guaranteed to be over-conservative in cases where the input probabilities are not identical, like when you call verify_bernoulli(shape=[1000]). (By "over-conservative", I mean that the true rate of flaky failures will be less than the nominal 1e-6 implied by the p-value threshold in the test.) Reference for that claim: https://stats.stackexchange.com/a/510798

I would recommend just documenting this with a comment in the code. Something like:

This test is strictly appropriate when input probabilities are all identical. In that case, it should lead to flaky failures in only one run in a million.
The test should be over-conservative when input probabilities are not identical. (i.e., It should have a rate of flaky failures lower than one run in a million.) If this test starts repeatedly throwing flaky failures, consult a statistician in addition to your regular debugging.
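For reference, a minimal sketch of this kind of check under the assumption that all input probabilities are identical (the helper name and threshold below are illustrative, not the PR's final test code):

```python
import numpy as np
import scipy.stats


def check_bernoulli_mean(tvm_out, prob, pvalue_threshold=1e-6):
    """Illustrative sketch, not the merged test: with identical input
    probabilities `prob`, the number of ones in the output follows a
    Binomial(n, prob) distribution, so a two-sided binomial test on the
    observed count bounds the flaky-failure rate by the threshold."""
    num_ones = int(np.count_nonzero(tvm_out))
    result = scipy.stats.binomtest(num_ones, tvm_out.size, prob)
    assert result.pvalue >= pvalue_threshold, (
        f"Bernoulli output mean is implausible (p-value={result.pvalue:.2e})"
    )
```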

Contributor:

Oh, god. I didn't read that reference link closely enough. It contains a really blatantly misogynistic comment. Apologies to anyone I exposed to that. Please don't include that reference link in the code.

(I think the math is right, though.)

Contributor Author:

Hello @octoJon! I've modified the test because it was already "over-conservative" with a p-value threshold of 1e-6. I've increased the threshold to 0.05, which is the more classical choice. If the test condition fails, there are two cases: either something is wrong in the operation, or we got a "bad" output sequence from the tail of the distribution. Since the latter is rare and should be rechecked, I repeat the test (and a third time if needed) with a new seed for the internal distribution (the input stays the same).
P.S. As you know, RandomUniform and RandomNormal have already been implemented on the TVM side. Possibly their CI tests should also be updated to improve stability and avoid flaky failures.
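A rough sketch of that retry strategy, assuming a hypothetical run_bernoulli(seed=...) callable that builds and runs the model (names, threshold, and structure are illustrative, not the merged test):

```python
import numpy as np
import scipy.stats


def run_bernoulli_with_retries(run_bernoulli, prob, max_attempts=3, pvalue_threshold=0.05):
    """Illustrative sketch only: rerun with a fresh internal seed (same input)
    so an unlucky tail sample is distinguished from a real bug before the
    test is declared failed."""
    for _ in range(max_attempts):
        out = run_bernoulli(seed=np.random.randint(0, 2**31 - 1))
        num_ones = int(np.count_nonzero(out))
        if scipy.stats.binomtest(num_ones, out.size, prob).pvalue >= pvalue_threshold:
            return out
    raise AssertionError("Bernoulli mean check failed on every attempt")
```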

@vvchernov (Contributor, Author)
Hello @AndrewZhaoLuo! Please recheck, discuss with Jon if needed, and merge if you can.

@AndrewZhaoLuo (Contributor) left a comment


Thanks!

@AndrewZhaoLuo AndrewZhaoLuo merged commit 1bc8cf8 into apache:main Jan 27, 2023
@vvchernov vvchernov deleted the vc/bernoulli branch January 28, 2023 06:49
fzi-peccia pushed a commit to fzi-peccia/tvm that referenced this pull request Mar 27, 2023
* add Bernoulli converter for onnx front-end

* test for bernoulli was implemented

* fix tuple split. update test for stability with different seed on ort and tvm sides

* check that output values are 0 or 1

* remove std check as meaningless

* calculate theoretical mean and compare with result, remove ort for comparison. clean code

* add customized input as arg

* add test with input sequence of 0 and 1

* pylint fix

* fix inputs-shape issue

* add binomial test

* fix input type

* small fix

* update 0-1 check

* init arrays in numpy style

* check result determinism for fixed seed

* fix inputs issue

* modify binomial test

* pylint fix

---------

Co-authored-by: Valery Chernov <[email protected]>